Comparative User Experiences of Next-Generation Catalogue Interfaces

Author

  • Rice Majors
Abstract

One of the presumed advantages of next-generation library catalogue interfaces is that the user experience is improved: that it is both richer and more intuitive. These interfaces often ship with little or no user-facing documentation or embedded “help” for patrons, on the assumption that the experience is easy and familiar because it follows best practices already in common use on the Web. While much gray literature (published on library Web sites, etc.) has interrogated these implicit claims and contrasted the new interfaces with traditional Web-based catalogues, this article details a consistent and formal comparison of whether users can actually accomplish common library tasks, unassisted, using these interfaces. The author has undertaken a task-based usability test of vendor-provided next-generation catalogue interfaces and Web-scale discovery tools (Encore Synergy, Summon, WorldCat Local, Primo Central, EBSCO Discovery Service). Testing was done with undergraduates across all academic disciplines. The resulting qualitative data, noting any demonstrated trouble using the software as well as feedback or suggested improvements that the users may have about the software, will assist academic libraries in making or validating purchase and subscription decisions for these interfaces and will help vendors make data-driven decisions about interface and experience enhancements.

LIBRARY TRENDS, Vol. 61, No. 1, 2012 (“Losing the Battle for Hearts and Minds? Next-Generation Discovery and Access in Library Catalogues,” edited by Kathryn La Barre), pp. 186–207. © 2012 The Board of Trustees, University of Illinois.

Introduction

This study looks at vendor-provided discovery interfaces (Encore Synergy, EBSCO Discovery Service, Primo Central, Summon, and WorldCat Local) from a user experience standpoint to provide libraries with a means of comparing the relative advantages of the various products (and thus to justify purchase and/or subscription decisions) and to provide vendors with a methodical source of data for improving their products.

The advent of the next-generation library catalogue, and subsequently the Web-scale discovery platform, has irrevocably changed the consideration of how library catalogue interfaces do and do not satisfy user expectations and user needs (Breeding, 2007; Majors & Mantz, 2011; Nagy, 2011; Vaughan, 2011). Leveraging observations and assumptions about the best practices of successful Web sites like Google, Amazon, and Flickr, and the user behaviors that are assumed to have developed from using those same sites, these interfaces offer a different user experience than that of traditional Web-based library catalogues. The architecture of these discovery platforms is such that they are their own “layer” (including a relatively high degree of informational and potentially transactional interoperability between the discovery platform and an integrated library system) on top of a traditional Web catalogue; this allows a library to implement a discovery layer from one vendor while maintaining an integrated library system from another vendor, vastly increasing the competitiveness of the marketplace for these products as well as increasing the scrutiny with which libraries must evaluate potential discovery solutions.

While there has been considerable comparative evaluation (e.g., feature comparison, informal user testing) of the interfaces by librarians, as well as partnerships between library customers and vendors to assess the user experience of individual products, there has been little formal, rigorous comparative testing among the products with the intention of sharing the results widely for a common good.
Even when results of comparative studies are made available (e.g., via library Web sites), the methodologies (and hence the implicit goals) of the comparisons are typically not.

Literature Review

Denton and Coysh (2011) give an excellent summary of the history of usability testing of discovery interfaces conducted up through the time of their own study. For the purposes of study design, a wide range of formal and informal studies of discovery platforms (Arcolio & Davidson, 2010; Arcolio & Poe, 2008; Hanson, 2009; Keiller, 2010; MIT Libraries, 2008; North Carolina State University, 2008; O’Hara, Nicholls, & Keiller, 2010; Online Computer Library Center [OCLC], 2009b, 2010; Sadeh, 2008) were examined to see where there was consensus about patron activities that were expected to be supported, and thus to inform the selection and design of tasks. Studies of discovery platforms conducted subsequent to the initial literature review (Ballard & Blaine, 2011; Casserly, Cole, & Waller, 2011; Clancy & Watson, 2010; Denton & Coysh, 2011; Gross & Sheridan, 2011; North Carolina State University, 2011; Serials Solutions, 2011; Tufts University, 2011a, 2011b; University of Nevada, Las Vegas, 2011; Williams & Foster, 2011; Xavier University, 2011; York St John University, 2010; Youngen, 2010; Yunkin, 2011) reinforced the decisions involving study design. Studies of discovery platforms focusing only on searching within a single discipline (e.g., music, medicine) were not examined.
General works covering issues of discovery (Bates, 2003; Calhoun, 2006; Hanson et al., 2009, 2011; Majors & Mantz, 2011; Matthews, 2009; OCLC, 2009a) and the evolution of discovery platforms (Breeding, 2007; Nagy, 2011; Vaughan, 2011; Wang & Lim, 2009) also fed into the design of user tasks, as did assessments of discovery platforms that either were not comparative user experience studies or did not disclose details of their comparison process (Cornell University Libraries, 2011; Dartmouth College Library, 2009; De, 2009; Featherstone & Wang, 2009; Fisher et al., 2011; Marmot Library Network, 2011; Philip, 2010; Rowe, 2010, 2011; Tofan, 2009; Yang & Wagner, 2010).

Scope of Study

For the purposes of this study, test participants were limited to undergraduate students, the largest population of potential novice users in a university setting. Participants were all undergraduate students enrolled at the University of Colorado (see Appendix 1 for demographic data of participants). Current and former employees of the University of Colorado Libraries were not eligible to participate, in order to eliminate any possibility of participants having received on-the-job training on library systems, best practices for searching, and/or library jargon.

Open-source discovery interfaces were not included in the study in order to allow for a more focused comparison between the (vendor-provided, turnkey) products. It is typical for a library either to be looking only at implementing open-source solutions (because the library has access to software development resources) or only at vendor-provided solutions (because the library does not have such access, or does not wish to allocate those resources toward a discovery interface); this study focused on gathering and analyzing data for the latter group.
To limit data collection to current practical product offerings, the range of vendor-provided discovery interfaces was limited to products that would be expected to be proposed by a vendor in response to an academic library-issued RFP. Hence, the most recent product developed and marketed by a vendor was included (e.g., Encore Synergy rather than Encore; Summon rather than AquaBrowser).

Methodology

Because the objectives were to assess existing functional products, task-based assessment testing (rather than focus groups or card sorts) was chosen (Rubin, 1994). After reviewing their rights with regard to participation (see Appendix 2), participants were given a script of four common library tasks (see Appendix 3) to complete and were instructed to begin each task in the discovery interface they were testing; participants were also told that it was fine to go beyond the discovery interface if necessary or desirable to complete tasks. Each participant was given the same set of tasks (with small editorial changes to reflect differences in library holdings) and in the same order. In keeping with common practice for task-based testing, participants were asked to “think out loud” about what they were doing and why they were taking various steps. Usability testing software (Morae) recorded their on-screen actions as well as their face and voice during task completion. Each participant tested only one interface, as having a single participant test multiple interfaces could have caused “learning” from one interface and improved or otherwise modified performance on subsequent interfaces.
As is typical in a competitive market space, there is a lack of consensus among the vendors on what exact array of patron tasks is or should be supported by a discovery interface; thus it was not easy, or even necessarily desirable (from an assessment standpoint), to design tasks that could assuredly be completed in every interface. Tasks were instead based on some of the most common undergraduate user activities and therefore in some cases may have tested an interface’s ability to guide the user toward other resources that would allow task completion (e.g., the library’s Web site). Further, many features of the discovery interfaces were either not tested or not explicitly tested (e.g., social Web features like tagging).

Each discovery interface was tested by five or six participants in total, a number accepted as sufficient to identify the most significant areas for improvement with respect to usability and user experience (Rubin, 1994). An actual library implementation of each discovery interface was used for testing (see Appendix 4). For the purposes of task completion, participants were instructed to assume that they were undergraduates enrolled at the institution whose interface was being tested. After completing the four tasks, or after forty minutes elapsed without task completion, a survey instrument (see Appendix 5) was used to capture basic demographic data as well as the participant’s impressions of their relative success using the discovery interface and their recommendations for improvements.

Analytical Process

Participants’ oral comments (made during task completion) along with written responses (in answer to questions on the survey instrument) were transcribed and analyzed for trends, and especially for consensus of opinion or behavior among participants.
Actions taken by participants, especially where task completion was difficult or took an unusually long period of time (with respect to other participants testing the same and/or other interfaces), were also analyzed for possible areas of improvement.
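The screening described above, flagging completions that took unusually long relative to other participants or that hit the forty-minute cap, could be sketched roughly as follows. The records, field names, and two-times-median threshold are all illustrative assumptions for this sketch, not data or procedures taken from the study:

```python
from statistics import median

# Illustrative session records: (participant, interface, task, seconds to
# complete; None = task not completed within the forty-minute cap).
# All values below are invented for demonstration.
sessions = [
    ("P1", "Summon", "find_book", 95),
    ("P2", "Summon", "find_book", 120),
    ("P3", "Summon", "find_book", 410),
    ("P4", "WorldCat Local", "find_book", 100),
    ("P5", "WorldCat Local", "find_book", None),
]

def flag_outliers(records, factor=2.0):
    """Flag completions slower than `factor` times the median completion
    time across all participants, plus any non-completions."""
    times = [t for _, _, _, t in records if t is not None]
    med = median(times)
    flags = []
    for participant, interface, task, t in records:
        if t is None:
            flags.append((participant, interface, task, "not completed"))
        elif t > factor * med:
            flags.append((participant, interface, task,
                          f"{t}s > {factor}x median ({med}s)"))
    return flags

for flag in flag_outliers(sessions):
    print(flag)
```

In this toy data set the screen would surface P3 (far slower than the median) and P5 (timed out), the kinds of sessions the study then examined qualitatively.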


Similar Resources

A New Trust Model for B2C E-Commerce Based on 3D User Interfaces

Lack of trust is one of the key bottlenecks in e-commerce development. Nowadays many advanced technologies are trying to address trust issues in e-commerce; one of them suggests using suitable user interfaces. This paper investigates the functionality and capabilities of 3D graphical user interfaces with regard to trust building in customers of the next generation of B2C e-commerce websit...


Natural User Interfaces: Trend in Virtual Interaction

Based on the fundamental constraints of natural ways of interacting, such as speech, touch, and contextual and environmental awareness, and on immersive 3D experiences, all with the goal of a computer that can see, listen, learn, talk, and act, we derive a set of trends prevailing for the next generation of user interfaces: the Natural User Interface (NUI). New technologies are pushing the boundaries of what is possible...


Faceted Browsing, Dynamic Interfaces, and Exploratory Search: Experiences and Challenges

Introduction The Relation Browser (RB) is a graphical interface for exploring information spaces, developed by the Interaction Design Lab at the University of North Carolina at Chapel Hill for use in research on how to support users’ needs to understand and explore information. In this abstract, we describe the Relation Browser, results of recent studies, and the design goals for the next-gener...


Toward a Software Model and a Specification Language for Next-Generation User Interfaces

As user interfaces evolve from traditional WIMP to ‘reality based interfaces’, developers are faced with a set of new challenges which are not addressed by current software tools and models. This paper discusses these challenges and presents a software model and specification language which are designed to simplify the development of non-WIMP interfaces such as virtual environments. Finally, we...


PRELIMINARY DRAFT Reality-based Interaction: Understanding the Next Generation of User Interfaces

Ubiquitous computing, tangible interfaces, and the spread of computers into a wide range of products and objects that surround us are changing interacting with computers from being a specialized activity, segregated from daily life, to becoming increasingly a part of the ‘‘real world.’’ At the same time, as computers are becoming more a part of the real world, user interfaces seem to be evolvin...



Journal:
  • Library Trends

Volume 61, Issue 1

Pages 186–207

Publication year: 2012